101 research outputs found

    Multi-Information Source Fusion and Optimization to Realize ICME: Application to Dual Phase Materials

    Integrated Computational Materials Engineering (ICME) calls for the integration of computational tools into the materials and parts development cycle, while the Materials Genome Initiative (MGI) calls for the acceleration of the materials development cycle through the combination of experiments, simulation, and data. As they stand, neither ICME nor MGI prescribes how to achieve the necessary tool integration or how to efficiently exploit the computational tools, in combination with experiments, to accelerate the development of new materials and materials systems. This paper addresses the first issue by putting forward a framework for the fusion of information that exploits correlations among sources/models and between the sources and 'ground truth'. The second issue is addressed through a multi-information source optimization framework that identifies, given current knowledge, the next best information source to query and where in the input space to query it, via a novel value-gradient policy. The querying decision takes into account the ability to learn correlations between information sources, the resource cost of querying an information source, and what a query is expected to provide in terms of improvement over the current state. The framework is demonstrated on the optimization of a dual-phase steel to maximize its strength-normalized strain hardening rate. The ground truth is represented by a microstructure-based finite element model, while three low-fidelity information sources (reduced-order models) based on different homogenization assumptions (isostrain, isostress, and isowork) are used to efficiently and optimally query the materials design space. (19 pages, 11 figures, 5 tables.)
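
    To make the querying idea concrete, below is a minimal, hypothetical sketch of a cost-aware source-selection loop: each candidate (source, design point) pair is scored by the improvement its prediction promises over the best design found so far, divided by that source's query cost, and the most promising point is then confirmed on the expensive ground-truth model. The toy objective, the three stand-in low-fidelity models, and the costs are illustrative assumptions; the paper's actual value-gradient policy and its fused surrogates are not reproduced here.

```python
import numpy as np

def ground_truth(x):
    # Expensive "true" model (illustrative stand-in).
    return np.sin(3 * x) * (1 - x) + 0.1 * x

# Cheap, biased information sources with their query costs (all illustrative).
sources = {
    "low_fidelity_a": (lambda x: np.sin(3 * x) * (1 - x), 1.0),
    "low_fidelity_b": (lambda x: np.sin(3 * x), 0.5),
    "low_fidelity_c": (lambda x: 0.1 * x + 0.2, 0.2),
}

candidates = np.linspace(0.0, 1.0, 21)        # design points under consideration
evaluated = {0: ground_truth(candidates[0])}  # seed with one ground-truth query
budget = 6.0

while budget > 0:
    best_so_far = max(evaluated.values())
    best_score, choice = 0.0, None
    for name, (model, cost) in sources.items():
        for i, x in enumerate(candidates):
            if i in evaluated:
                continue
            # Predicted gain over the incumbent, per unit of query cost.
            score = max(model(x) - best_so_far, 0.0) / cost
            if score > best_score:
                best_score, choice = score, (name, i, cost)
    if choice is None:
        break  # no source predicts any further improvement
    name, i, cost = choice
    evaluated[i] = ground_truth(candidates[i])  # confirm the suggestion
    budget -= cost
    print(f"{name} suggests x={candidates[i]:.2f}; ground truth = {evaluated[i]:.3f}")

best_i = max(evaluated, key=evaluated.get)
print(f"best design found: x = {candidates[best_i]:.2f}")
```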

    A Mathematical and Computational Framework for Multifidelity Design and Analysis with Computer Models

    A multifidelity approach to design and analysis for complex systems seeks to optimally exploit all available models and data. Existing multifidelity approaches generally attempt to calibrate low-fidelity models or replace low-fidelity analysis results using data from higher-fidelity analyses. This paper proposes a fundamentally different approach that uses the tools of estimation theory to fuse together information from multifidelity analyses, resulting in a Bayesian-based approach to mitigating risk in complex system design and analysis. This approach is combined with maximum entropy characterizations of model discrepancy to represent epistemic uncertainties due to modeling limitations and model assumptions. Mathematical interrogation of the uncertainty in system output quantities of interest is achieved via a variance-based global sensitivity analysis, which identifies the primary contributors to output uncertainty and thus provides guidance for adaptation of model fidelity. The methodology is applied to multidisciplinary design optimization and demonstrated on a wing-sizing problem for a high-altitude, long-endurance vehicle. United States. Air Force Office of Scientific Research, Small Business Technology Transfer Program (Contract FA9550-09-C-0128).
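
    As a much-simplified illustration of the estimation-theoretic fusion idea: if each analysis returns an estimate of a quantity of interest together with a variance describing its discrepancy (a Gaussian being the maximum-entropy characterization for a fixed mean and variance), independent estimates can be combined by inverse-variance weighting. The numerical values below are placeholders, and the independence assumption is a simplification of the approach described above, which also accounts for correlation between analyses.

```python
import numpy as np

# Estimates of a quantity of interest from analyses of increasing fidelity,
# each with a variance encoding its model discrepancy (illustrative values).
estimates = np.array([1.30, 1.12, 1.05])   # low, medium, high fidelity
variances = np.array([0.20, 0.08, 0.02])

# Inverse-variance (minimum-variance) fusion for independent Gaussian estimates.
weights = (1.0 / variances) / np.sum(1.0 / variances)
fused_mean = np.sum(weights * estimates)
fused_var = 1.0 / np.sum(1.0 / variances)

print(f"fused estimate: {fused_mean:.3f} +/- {np.sqrt(fused_var):.3f} (1 sigma)")
```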

    Distributional sensitivity analysis

    Among the uses for global sensitivity analysis is factor prioritization. A key assumption for this is that a given factor can, through further research, be fixed to some point on its domain. For factors containing epistemic uncertainty, this is an optimistic assumption, which can lead to inappropriate resource allocation. Thus, this research develops an original method, referred to as distributional sensitivity analysis, that considers which factors would on average cause the greatest reduction in output variance, given that the portion of a particular factor's variance that can be reduced is a random variable. A key aspect of the method is that the analysis is performed directly on the samples that were generated during a global sensitivity analysis, using acceptance/rejection sampling. In general, if N model runs per factor are required for a global sensitivity analysis, then those same N model runs are sufficient for a distributional sensitivity analysis.
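
    A small, hypothetical sketch of the sample-reuse idea follows: the model runs from a global sensitivity analysis are reused, and for each randomly drawn residual range of a factor, the output variance is recomputed from the accepted samples only, giving the average achievable variance reduction without any new model evaluations. The toy model and the distribution over residual ranges are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples assumed to come from an earlier global sensitivity analysis (toy model).
n = 20_000
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = 4.0 * x1 + np.sin(2.0 * np.pi * x2)
total_var = np.var(y)

# The achievable reduction of x1's variance is itself uncertain: draw random
# narrowed ranges for x1 and reuse the existing runs via acceptance/rejection.
reductions = []
for _ in range(200):
    width = rng.uniform(0.1, 1.0)             # residual spread of x1 after further research
    lo = rng.uniform(0.0, 1.0 - width)
    accept = (x1 >= lo) & (x1 <= lo + width)  # reject samples outside the narrowed range
    if accept.sum() > 100:
        reductions.append(total_var - np.var(y[accept]))

print(f"total output variance:               {total_var:.3f}")
print(f"average reduction from narrowing x1: {np.mean(reductions):.3f}")
```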

    Surrogate Modeling for Uncertainty Assessment with Application to Aviation Environmental System Models

    Numerical simulation models to support decision-making and policy-making processes are often complex, involving many disciplines, many inputs, and long computation times. Inputs to such models are inherently uncertain, leading to uncertainty in model outputs. Characterizing, propagating, and analyzing this uncertainty is critical both to model development and to the effective application of model results in a decision-making setting; however, the many thousands of model evaluations required to sample the uncertainty space (e.g., via Monte Carlo sampling) present an intractable computational burden. This paper presents a novel surrogate modeling methodology designed specifically for propagating uncertainty from model inputs to model outputs and for performing a global sensitivity analysis, which characterizes the contributions of uncertainties in model inputs to output variance, while maintaining the quantitative rigor of the analysis by providing confidence intervals on surrogate predictions. The approach is developed for a general class of models and is demonstrated on an aircraft emissions prediction model that is being developed and applied to support aviation environmental policy-making. The results demonstrate how the confidence intervals on surrogate predictions can be used to balance the tradeoff between computation time and uncertainty in the estimation of the statistical outputs of interest. United States. Federal Aviation Administration (contract no. DTFAWA-05-D-00012, Task Order 0002).
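
    The following sketch illustrates the general idea of propagating input uncertainty through a cheap surrogate while attaching a confidence interval to the resulting statistic; a bootstrapped polynomial fit is used here purely as a generic stand-in, not as the specific surrogate methodology proposed in the paper, and the toy model and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x):
    # Stand-in for the full simulation model.
    return np.exp(-x) * np.sin(4.0 * x)

# A small number of expensive runs used to train the cheap surrogate.
x_train = np.linspace(0.0, 2.0, 12)
y_train = expensive_model(x_train)

# Monte Carlo samples of the uncertain model input.
x_mc = np.clip(rng.normal(1.0, 0.3, 50_000), 0.0, 2.0)

# Bootstrap the training runs so that the estimated statistic carries a
# confidence interval reflecting surrogate (training-data) uncertainty.
mean_estimates = []
for _ in range(500):
    idx = rng.integers(0, len(x_train), len(x_train))
    coeffs = np.polyfit(x_train[idx], y_train[idx], deg=3)  # polynomial surrogate
    mean_estimates.append(np.mean(np.polyval(coeffs, x_mc)))

lo, hi = np.percentile(mean_estimates, [2.5, 97.5])
print(f"estimated output mean:   {np.mean(mean_estimates):.4f}")
print(f"95% confidence interval: [{lo:.4f}, {hi:.4f}]")
```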

    A physics-based emissions model for aircraft gas turbine combustors

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. Includes bibliographical references (p. 103-105). In this thesis, a physics-based model of an aircraft gas turbine combustor is developed for predicting NOx and CO emissions. The objective of the model is to predict the emissions of current and potential future gas turbine engines within quantified uncertainty bounds for the purpose of assessing design tradeoffs and interdependencies in a policy-making setting. The approach taken is to capture the physical relationships among operating conditions, combustor design parameters, and pollutant emissions. The model is developed using only high-level combustor design parameters and ideal reactors. The predictive capability of the model is assessed by comparing model estimates of NOx and CO emissions from five different industry combustors to certification data. The model developed in this work correctly captures the physical relationships between engine operating conditions, combustor design parameters, and NOx and CO emissions. The NOx estimates are as good as, or better than, the NOx estimates from an established empirical model; and the CO estimates are within the uncertainty in the certification data at most of the important low power operating conditions. By Douglas L. Allaire. S.M.

    Uncertainty assessment of complex models with application to aviation environmental systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009. Includes bibliographical references (p. 131-136). Numerical simulation models that support decision-making and policy-making processes are often complex, involving many disciplines and long computation times. These models typically have many factors of different character, such as operational, design-based, technological, and economics-based. Such factors generally contain uncertainty, which leads to uncertainty in model outputs. For such models, it is critical to both the application of model results and the future development of the model that uncertainty be properly assessed. This thesis presents a comprehensive approach to the uncertainty assessment of complex models intended to support decision- and policy-making processes. The approach consists of seven steps: establishing assessment goals, documenting assumptions and limitations, documenting model factors and outputs, classifying and characterizing factor uncertainty, conducting uncertainty analysis, conducting sensitivity analysis, and presenting results. Factor uncertainty is represented probabilistically, characterized by the principle of maximum uncertainty, and propagated via Monte Carlo simulation. State-of-the-art methods of global sensitivity analysis are employed to apportion model output variance across model factors, and a fundamental extension of global sensitivity analysis, termed distributional sensitivity analysis, is developed to determine on which factors future research should focus to reduce output variability. The complete approach is demonstrated on a real-world model intended to estimate the impacts of aviation on climate change in support of decision- and policy-making, where it is established that a systematic approach to uncertainty assessment is critical to the proper application and future development of complex models. A novel surrogate modeling methodology designed specifically for uncertainty assessment is also presented and demonstrated for an aircraft emissions prediction model that is being developed and applied to support aviation environmental policy-making. The results demonstrate how confidence intervals on surrogate model predictions can be used to balance the tradeoff between computation time and uncertainty in the estimation of statistical outputs of interest in uncertainty assessment. By Douglas Lawrence Allaire. Ph.D.
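
    As a brief illustration of the variance-apportionment step mentioned above, the sketch below estimates first-order Sobol' indices with the standard pick-freeze Monte Carlo estimator on a toy three-factor model; the model and sample sizes are illustrative only and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Toy stand-in for the complex model; x has one column per factor.
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(0.0, 1.0, (n, d))   # two independent sampling matrices
B = rng.uniform(0.0, 1.0, (n, d))
yA = model(A)
var_y = np.var(yA)

# First-order Sobol' indices via the pick-freeze estimator: re-evaluate the model
# with column i taken from A and all other columns from B, then correlate.
for i in range(d):
    AB_i = B.copy()
    AB_i[:, i] = A[:, i]
    yAB = model(AB_i)
    S_i = (np.mean(yA * yAB) - np.mean(yA) ** 2) / var_y
    print(f"first-order index of factor {i}: {S_i:.3f}")
```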

    A decomposition-based approach to uncertainty analysis of feed-forward multicomponent systems

    To support effective decision making, engineers should comprehend and manage various uncertainties throughout the design process. Unfortunately, in modern systems, uncertainty analysis can become cumbersome and computationally intractable for one individual or group to manage. This is particularly true for systems composed of a large number of components. In many cases, these components may be developed by different groups and even run on different computational platforms. This paper proposes an approach for decomposing the uncertainty analysis task among the various components comprising a feed-forward system and synthesizing the local uncertainty analyses into a system uncertainty analysis. Our proposed decomposition-based multicomponent uncertainty analysis approach is shown to be provably convergent in distribution under certain conditions. The proposed method is illustrated on quantification of uncertainty for a multidisciplinary gas turbine system and is compared to a traditional system-level Monte Carlo uncertainty analysis approach. SUTD-MIT International Design Centre; United States. Defense Advanced Research Projects Agency, META Program (United States. Air Force Research Laboratory Contract FA8650-10-C-7083); Vanderbilt University (Contract VU-DSR#21807-S7); United States. Federal Aviation Administration, Office of Environment and Energy (FAA Award 09-C-NE-MIT, Amendments 028, 033, and 038).
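
    A loose sketch of the decomposition idea for a two-component feed-forward chain follows: each component is analyzed locally with Monte Carlo, the downstream component using an assumed proposal distribution for its coupling input, and the local results are then synthesized by re-weighting toward the upstream component's actual output distribution. The Gaussian summary of the coupling variable and the toy components are simplifications for brevity; the paper's actual synthesis procedure and its convergence conditions are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def component_a(x):
    # Upstream component (toy stand-in).
    return x ** 2 + 0.1 * x

def component_b(u):
    # Downstream component, fed by component_a's output.
    return np.sin(u) + 0.5 * u

n = 50_000

# Local analysis of component A under its own input uncertainty.
x = rng.normal(1.0, 0.2, n)
u_a = component_a(x)                     # samples of the coupling variable

# Local analysis of component B, performed with an assumed proposal
# distribution for the coupling input (before A's results are available).
proposal = stats.norm(1.0, 0.5)
u_prop = proposal.rvs(size=n, random_state=rng)
y_b = component_b(u_prop)

# Synthesis: re-weight B's local samples toward A's actual output distribution
# (summarized here by a simple Gaussian fit) using importance weights.
target = stats.norm(np.mean(u_a), np.std(u_a))
w = target.pdf(u_prop) / proposal.pdf(u_prop)
w /= np.sum(w)

decomposed = np.sum(w * y_b)
all_at_once = np.mean(component_b(component_a(rng.normal(1.0, 0.2, n))))
print(f"decomposed estimate of E[y]:   {decomposed:.4f}")
print(f"all-at-once Monte Carlo check: {all_at_once:.4f}")
```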

    Optimal L2-norm empirical importance weights for the change of probability measure

    This work proposes an optimization formulation to determine a set of empirical importance weights to achieve a change of probability measure. The objective is to estimate statistics from a target distribution using random samples generated from a (different) proposal distribution. This work considers the specific case in which the proposal distribution from which the random samples are generated is unknown; that is, we have available the samples but no explicit description of their underlying distribution. In this setting, the Radon-Nikodym theorem provides a valid but indeterminable solution to the task, since the distribution from which the random samples are generated is inaccessible. The proposed approach employs the well-defined and determinable empirical distribution function associated with the available samples. The core idea is to compute importance weights associated with the random samples, such that the distance between the weighted proposal empirical distribution function and the desired target distribution function is minimized. The distance metric selected for this work is the L2-norm, and the importance weights are constrained to define a probability measure. The resulting optimization problem is shown to be a single linear equality and box-constrained quadratic program. This problem can be solved efficiently using optimization algorithms that scale well to high dimensions. Under some conditions restricting the class of distribution functions, the solution of the optimization problem is shown to result in a weighted proposal empirical distribution function that converges to the target distribution function in the L1-norm as the number of samples tends to infinity. Results on a variety of test cases show that the proposed approach performs well in comparison with other well-known approaches. Singapore University of Technology and Design, International Design Center; United States. Defense Advanced Research Projects Agency (META program through AFRL Contract FA8650-10-C-7083 and Vanderbilt University Contract VUDSR#21807-S7); United States. Federal Aviation Administration, Office of Environment and Energy (FAA Award No. 09-C-NE-MIT, Amendment Nos. 028, 033, and 038).
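
    A compact sketch of the optimization formulation described above: the L2 distance between the weighted proposal empirical CDF and the target CDF is discretized on a grid and minimized subject to the weights forming a probability measure (nonnegative, summing to one). A general-purpose SLSQP solver stands in for a dedicated quadratic-programming solver, and the sample size, grid, and distributions are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)

# Proposal samples (their underlying distribution is treated as unknown) and the
# target distribution whose statistics are wanted; both choices are illustrative.
n = 80
x = rng.normal(0.0, 1.5, n)          # samples from the proposal distribution
target = stats.norm(0.5, 1.0)

# Discretize the L2 distance between the weighted empirical CDF and the target CDF.
grid = np.linspace(-5.0, 5.0, 200)
A = (x[None, :] <= grid[:, None]).astype(float)   # A[k, i] = 1{x_i <= t_k}
b = target.cdf(grid)

def objective(w):
    r = A @ w - b
    return r @ r

def gradient(w):
    return 2.0 * A.T @ (A @ w - b)

# Box constraints plus a single linear equality (weights form a probability measure).
result = optimize.minimize(
    objective,
    x0=np.full(n, 1.0 / n),
    jac=gradient,
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
    method="SLSQP",
)
w = result.x

# Use the weights to estimate a statistic of the target from the proposal samples.
print(f"weighted estimate of the target mean: {np.sum(w * x):.3f} (exact value: 0.5)")
```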